Conversation
**Greptile Summary**

This PR introduces a complete patrolling module for the Dimensional OS robot stack, enabling autonomous area patrol via three interchangeable routing strategies (coverage, frontier, random). It wires cleanly into the existing stack.

Key findings: the core patrolling logic is well-structured, thread-safe, and backed by tests. These two issues are straightforward to fix and should be addressed before merge.

Confidence Score: 4/5
**Sequence Diagram**

```mermaid
sequenceDiagram
    participant Agent as AI Agent / Skill
    participant PM as PatrollingModule
    participant Router as PatrolRouter (Coverage/Frontier/Random)
    participant Nav as NavigationSystem
    participant ROS as ROS Topics (odom / costmap / goal_reached)
    Agent->>PM: start_patrol()
    PM->>PM: spawn _patrol_thread
    loop Patrol Loop
        PM->>Router: next_goal()
        Router->>Router: sample unvisited safe cells (Voronoi-weighted)
        Router->>Router: run min_cost_astar for each candidate
        Router-->>PM: PoseStamped goal (or None)
        alt goal is None
            PM->>PM: wait 2s or stop_event
        else goal available
            PM->>Nav: goal_request.publish(goal)
            PM->>PM: _goal_or_stop_event.wait()
            Nav-->>ROS: robot navigates
            ROS-->>PM: goal_reached callback
            PM->>PM: _goal_or_stop_event.set()
        end
    end
    ROS-->>PM: odom callback → _latest_pose, Router.handle_odom()
    ROS-->>PM: global_costmap callback → Router.handle_occupancy_grid() (throttled 60s)
    Router->>Router: VisitationHistory.update_grid() + _rebuild()
    Agent->>PM: stop_patrol()
    PM->>PM: _stop_event.set(), _goal_or_stop_event.set()
    PM->>Nav: goal_request.publish(_latest_pose) [cancel navigation]
    PM->>PM: _patrol_thread.join()
```
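The `next_goal()` step in the diagram samples unvisited safe cells and scores each candidate with `min_cost_astar`. A minimal sketch of that idea, using a plain 0/1 occupancy grid, 4-connected A*, and uniform sampling (the real router uses Voronoi-weighted sampling and ROS grid types, so every name here beyond `candidates_to_consider` is an assumption for illustration):

```python
import heapq
import random

def astar_cost(grid, start, goal):
    """4-connected A* over a 0/1 occupancy grid; returns path cost or None."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), 0, start)]
    best = {start: 0}
    while open_set:
        _, g, cur = heapq.heappop(open_set)
        if cur == goal:
            return g
        if g > best.get(cur, float("inf")):
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                ng = g + 1
                if ng < best.get(nxt, float("inf")):
                    best[nxt] = ng
                    heapq.heappush(open_set, (ng + h(nxt), ng, nxt))
    return None  # goal unreachable

def next_goal(grid, robot, visited, candidates_to_consider=5):
    """Sample unvisited free cells, keep the one with the cheapest A* path."""
    free = [(r, c) for r in range(len(grid)) for c in range(len(grid[0]))
            if grid[r][c] == 0 and (r, c) not in visited]
    if not free:
        return None  # everything visited: nothing to patrol right now
    sample = random.sample(free, min(candidates_to_consider, len(free)))
    scored = [(astar_cost(grid, robot, cell), cell) for cell in sample]
    scored = [(cost, cell) for cost, cell in scored if cost is not None]
    return min(scored)[1] if scored else None
```

Scoring only a bounded number of candidates per call keeps the loop cheap even on large costmaps, at the cost of occasionally missing the globally optimal next cell.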
Last reviewed commit: 1eec5e0
I'd maybe add some of those nice pictures plus a short description of how it works into the docs somewhere?
```python
Args:
    query: Description of the person to follow (e.g., "man with blue shirt")
    initial_bbox: Optional pre-computed bounding box [x1, y1, x2, y2].
```
This is exactly what the Detection2D/bbox model is; should we use those for detections to follow?
I can imagine visual following making sense in many cases, so it would be good to support standard detection types?
person_follow.py does support navigating by Detection3D, but it's not the default. Following EdgeTAM detections works well enough in practice. With Detection3D the distance isn't estimated in all situations; for example, when a person is detected in the distance but no voxels have been observed there yet. Visual servoing works quite nicely, from close-up (feet-only detections) all the way to far-away detections.
But this is probably not related to this PR, so maybe it should be discussed further in an issue.
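The trade-off in this reply (3D range when available, visual servoing otherwise) can be sketched as a tiny mode-switching controller. Everything here is hypothetical illustration, not code from the PR; the `Detection` type, `follow_command`, and the constants are invented for the sketch:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Detection:
    bbox: Tuple[int, int, int, int]   # (x1, y1, x2, y2) in pixels
    distance_m: Optional[float]       # None when no voxels observed yet

def follow_command(det: Detection, image_width: int = 640,
                   target_distance_m: float = 1.5) -> Tuple[float, float]:
    """Return (linear, angular) commands; falls back to visual servoing."""
    cx = (det.bbox[0] + det.bbox[2]) / 2
    # steer toward the bbox centre regardless of mode
    angular = (image_width / 2 - cx) / (image_width / 2)
    if det.distance_m is not None:
        # 3D estimate available: close the range gap directly
        linear = max(0.0, det.distance_m - target_distance_m)
    else:
        # No depth (e.g. distant person, no voxels yet): creep forward
        # using bbox height as a crude proximity cue
        height_frac = (det.bbox[3] - det.bbox[1]) / 480
        linear = max(0.0, 0.8 - height_frac)
    return linear, angular
```

The point of the sketch is that the two modes share the angular term, so switching between them as depth estimates appear and disappear stays smooth.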
```python
class PatrollingModule(Module):
    odom: In[PoseStamped]
```
Because tf does not support subscribing; you always have to request transforms.
```python
class PatrollingModule(Module):
    odom: In[PoseStamped]
    global_costmap: In[OccupancyGrid]
    goal_reached: In[Bool]
```
This might be a good situation for Jeff's RPC, where we feed the nav module directly into this module as a dependency so it can call RPCs on it? We'd get nice type-level dependence, right? I haven't done this yet, so I'm slightly unsure.
I've used Jeff's RPC in multiple places, including this PR, but I don't think it makes sense here. In this case, I want the navigation module to notify me when the goal has been reached (or declared a failure). I can't do that via RPC.
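The reply's point is that an RPC is a request/response call, while `goal_reached` is a push notification the patrol thread blocks on. A minimal sketch of that pattern, using the event names from the diagram (`GoalWaiter` itself and the method names are hypothetical):

```python
import threading

class GoalWaiter:
    """Topic-callback pattern: nav publishes goal_reached, patrol thread blocks."""
    def __init__(self):
        self._goal_or_stop_event = threading.Event()
        self._stopped = False

    def on_goal_reached(self, reached: bool) -> None:
        # subscriber callback: wake the patrol thread
        self._goal_or_stop_event.set()

    def stop(self) -> None:
        # stop_patrol path: also wakes the waiter so shutdown is prompt
        self._stopped = True
        self._goal_or_stop_event.set()

    def wait_for_goal(self, timeout=None) -> bool:
        """Block until goal_reached or stop; False means we were stopped."""
        self._goal_or_stop_event.clear()
        self._goal_or_stop_event.wait(timeout)
        return not self._stopped
```

An RPC to the nav module could start navigation, but the wake-up still has to arrive as a callback; sharing one event for "goal reached" and "stop requested" is what keeps `stop_patrol` from hanging mid-goal.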
```python
with self._patrol_lock:
    if self._patrol_thread is not None and self._patrol_thread.is_alive():
        return "Patrol is already running. Use `stop_patrol` to stop."
    self._planner_spec.set_replanning_enabled(False)
```
How come? Don't we have a dynamic-environment issue if we disable replanning?
In normal navigation, when you set a goal, you need to get there. With patrolling, what's important is covering the patrol area, not necessarily getting to a specific point.
When setting a specific random target, things occasionally trigger re-planning (maybe someone got in the way, maybe some random noise blocked the path, maybe the goal was unreachable to begin with).
Instead of re-planning to reach that arbitrary target, it's a lot faster to just select another one. If the goal is unreachable, the robot otherwise wastes a lot of time trying to get to a place it cannot reach.
The router is in charge of making sure enough of the area is covered, so if the goal was actually important to get to, it will try to get there soon enough.
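The policy described above (on failure, pick a fresh goal rather than replan to the old one) can be sketched in a few lines. `patrol_step`, the stub `navigate` callable, and `max_attempts` are hypothetical names for illustration; the real loop lives in `PatrollingModule`:

```python
def patrol_step(router, navigate, max_attempts=3):
    """One patrol iteration: on navigation failure, sample a new goal
    instead of replanning to the same one. The router's coverage
    bookkeeping brings genuinely important cells back around later."""
    for _ in range(max_attempts):
        goal = router.next_goal()
        if goal is None:
            return None          # nothing left to visit right now
        if navigate(goal):       # blocks until reached or declared failed
            return goal          # reached; router marks it visited via odom
        # failed: fall through and ask for a fresh goal
    return None
```

This is why disabling the planner's replanning is safe here: the retry loop above replaces replanning at a coarser granularity, swapping the goal instead of the path.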
**Problem**
We need a way to patrol explored areas.
Closes DIM-659
**Solution**

- Wrote a few patrolling algorithms; I'm using `coverage` by default because it looks the best.
- Wrote a test to visualize how well it covers the space (all tests are on the Hong Kong office scan from Ivan):
- `saturation_threshold` and `clearance_radius_m`.
- `candidates_to_consider`.

**Breaking Changes**
None.
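The tunables named in the Solution section could be grouped into one config object. This is a hypothetical sketch only; the names come from the PR description, but the defaults and the comments describing each field are assumptions, not the module's actual semantics:

```python
from dataclasses import dataclass

@dataclass
class CoverageRouterConfig:
    # Assumed semantics; the real defaults live in the coverage router.
    saturation_threshold: float = 0.9  # fraction of cells visited before history resets (assumed)
    clearance_radius_m: float = 0.5    # minimum distance a goal keeps from obstacles (assumed)
    candidates_to_consider: int = 5    # goals sampled and scored per next_goal() call (assumed)
```

Keeping these in a dataclass makes the knobs easy to override per deployment without touching router internals.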
**How to Test**
and tell it to start patrolling.
**Issues**
This doesn't always work because you get bad detections such as this one (it's actually part of the chair):
It messes up the entire flow. We need persistent 3D detections, and maybe a 360 camera.